
    Editorial: Perceiving and Acting in the real world: from neural activity to behavior

    The interaction between perception and action represents one of the pillars of human evolutionary success. Our interactions with the surrounding world involve a variety of behaviors, almost always including movements of the eyes and hands. Such actions rely on neural mechanisms that must process an enormous amount of information in order to generate appropriate motor commands. Yet, compared to the great advancements in the field of perception for cognition, the neural underpinnings of how we control our movements, as well as the interactions between perception and motor control, remain elusive. With this research topic, we provide a framework for: 1) the perception of real objects and shapes using visual and haptic information, 2) the reference frames for action and perception, and 3) how perceived target properties are translated into goal-directed actions and object manipulation. The studies in this special issue employ a variety of methodologies that include behavioural kinematics, neuroimaging, transcranial magnetic stimulation and patient cases. Here we provide a brief summary of and commentary on the articles included in this research topic.

    Familiar size effects on reaction time: When congruent is better

    Familiar size is known to influence our perception of objects’ size and distance. In this study, we examined whether simple reaction times (RTs) are also affected by prior knowledge of objects’ size. In a series of experiments, participants were asked to respond as quickly as possible to briefly presented images of familiar objects, equated for luminance and retinal size. The effects of familiar size and object animacy on RTs were investigated under natural (Experiment 1) and reduced (Experiment 2) viewing conditions. Restricted viewing conditions were introduced to manipulate the availability of depth cues. A systematic effect of familiar size on RTs was then examined for objects progressively “shrunken” (Experiment 3) and “enlarged” (Experiment 4) on the screen with respect to their familiar size. Measures of perceived size were also taken by means of a manual estimation task (Experiment 5). Results showed an effect of animacy on simple RTs: participants were faster to respond to images of animals than of non-animals. An effect of familiar size on simple RTs was also observed, but only under reduced viewing conditions: objects shown closer to their real-world size were detected significantly more quickly than those further from their familiar size. However, this familiar-size advantage did not reflect perceived size. Hence, simple RTs under reduced viewing conditions are modulated by the degree of compatibility between physical size and long-term representations of size.

    Proprioceptive Distance Cues Restore Perfect Size Constancy in Grasping, but Not Perception, When Vision Is Limited

    Our brain integrates information from multiple modalities in the control of behavior. When information from one sensory source is compromised, information from another source can compensate for the loss. What is not clear is whether the nature of this multisensory integration and the re-weighting of different sources of sensory information are the same across different control systems. Here, we investigated whether proprioceptive distance information (position sense of body parts) can compensate for the loss of visual distance cues that support size constancy in perception (mediated by the ventral visual stream) [1, 2] versus size constancy in grasping (mediated by the dorsal visual stream) [3–6], in which the real-world size of an object is computed despite changes in viewing distance. We found that there was perfect size constancy in both perception and grasping in a full-viewing condition (lights on, binocular viewing) and that size constancy in both tasks was dramatically disrupted in the restricted-viewing condition (lights off; monocular viewing of the same but luminescent object through a 1-mm pinhole). Importantly, in the restricted-viewing condition, proprioceptive cues about viewing distance originating from the non-grasping limb (experiment 1) or the inclination of the torso and/or the elbow angle of the grasping limb (experiment 2) compensated for the loss of visual distance cues to enable a complete restoration of size constancy in grasping but only a modest improvement of size constancy in perception. This suggests that the weighting of different sources of sensory information varies as a function of the control system being used.
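
    As background to the size-constancy computation at issue here, the retinal image of an object shrinks as viewing distance increases, so recovering its real-world size requires an estimate of that distance, whether from visual or, as in this study, proprioceptive cues. A minimal geometric sketch (our notation, not the authors'): an object of physical size S at viewing distance D subtends a retinal angle \theta, with

        S = 2 D \tan(\theta / 2),

    so the same retinal angle \theta corresponds to a larger physical size when D is larger; size constancy amounts to combining \theta with an estimate of D in this way.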

    Susceptibility to optical illusions varies as a function of the autism-spectrum quotient but not in ways predicted by local–global biases

    Individuals with autism spectrum disorder and those with autistic tendencies in non-clinical groups are thought to have a perceptual style privileging local details over global integration. We used 13 illusions to investigate this perceptual style in typically developing adults with various levels of autistic traits. Illusory susceptibility was entered into a principal-component analysis. Only one factor, consisting of the Shepard’s tabletops and square–diamond illusions, showed reduced susceptibility as a function of autistic traits. Given that only two illusions were affected and that these illusions depend mostly on the processing of within-object relational properties, we conclude that there is something distinct about autistic-like perceptual functioning, but not in ways predicted by a preference for local over global elements.
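
    To make the analysis pipeline concrete, the sketch below shows the kind of principal-component analysis described above applied to per-participant susceptibility scores for 13 illusions; the data, variable names, and the follow-up correlation with autism-spectrum-quotient (AQ) scores are hypothetical placeholders, not the authors' materials.

        # Illustrative sketch: PCA of illusion-susceptibility scores (hypothetical data).
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        n_participants, n_illusions = 100, 13
        susceptibility = rng.normal(size=(n_participants, n_illusions))  # rows: participants
        aq_scores = rng.normal(size=n_participants)                      # hypothetical AQ scores

        pca = PCA()
        factor_scores = pca.fit_transform(susceptibility)   # per-participant score on each factor
        loadings = pca.components_                           # how strongly each illusion loads on each factor
        print(pca.explained_variance_ratio_[:3])             # variance explained by the first factors

        # Relate susceptibility on a given factor to autistic traits, e.g. the first factor:
        print(np.corrcoef(factor_scores[:, 0], aq_scores)[0, 1])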

    The contribution of linear perspective cues and texture gradients in the perceptual rescaling of stimuli inside a Ponzo illusion corridor

    We examined the influence of linear perspective cues and texture gradients in the perceptual rescaling of stimuli over a highly salient Ponzo illusion of a corridor. We performed two experiments using the Method of Constant Stimuli in which participants judged the size of one of two rings. In experiment 1, one ring was presented in the upper visual field at the end of the corridor and the other in the lower visual field at the front of the corridor. The perceived size of the top and bottom rings changed as a function of the availability of linear perspective and textures. In experiment 2, only one ring was presented, either at the top or the bottom of the image. The perceived size of the top but not the bottom ring changed as a function of the availability of linear perspective and textures. In both experiments, the effects of the cues were additive. Perceptual rescaling was also stronger for the top than for the bottom ring. Additional eye-tracking revealed that participants tended to gaze more in the upper than in the lower visual field. These findings indicate that top-down mechanisms provide an important contribution to the Ponzo illusion. Nonetheless, additional maximum likelihood estimation analyses revealed that linear perspective made a greater contribution in experiment 2, which is suggestive of a bottom-up mechanism. We conclude that both top-down and bottom-up mechanisms play important roles. However, the former seems to play a more prominent role when both stimuli are presented in the image.
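
    For context on the maximum-likelihood-estimation analysis mentioned above, the standard MLE cue-combination model weights each cue by its reliability (the inverse of its variance); a sketch in the usual textbook notation (ours, and assuming the standard model rather than the authors' exact formulation): with single-cue size estimates \hat{S}_{p} from linear perspective and \hat{S}_{t} from texture gradients,

        \hat{S} = w_{p}\,\hat{S}_{p} + w_{t}\,\hat{S}_{t}, \qquad
        w_{p} = \frac{1/\sigma_{p}^{2}}{1/\sigma_{p}^{2} + 1/\sigma_{t}^{2}}, \qquad
        w_{t} = 1 - w_{p},

    so a cue contributes more in this framework when its variance \sigma^{2} is small relative to the other cue's.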

    The Shepard Illusion Is Reduced in Children With an Autism Spectrum Disorder Because of Perceptual Rather Than Attentional Mechanisms

    Earlier studies demonstrated reduced illusion strength in the Shepard illusion in adults and adolescents with an autism spectrum disorder (ASD) and in typically developing (TD) adults with high levels of autistic traits. We measured the strength of the Shepard illusion in ASD and TD children and tested whether ten different eye-tracking measurements could predict group differences in illusion strength. The ASD children demonstrated reduced illusion strength relative to the TD group. Despite this, there were no mean differences between groups on any of the eye-tracking measurements. Even though none of the eye-tracking measurements revealed mean differences between the two groups, the degree to which spatial attention was directed toward the standard stimulus, as indexed by the number of saccades within and toward this stimulus, predicted the strength of the illusion in the overall sample. Furthermore, this active scanning of the standard stimulus enhanced illusion strength more strongly in the ASD than in the TD group. Together, we conclude that scan patterns and the degree to which participants are able to shift between different locations in a visual scene did not account for group differences in illusion strength. Thus, the reduced strength of the Shepard illusion in ASD does not appear to be driven by how attention shifts or is spatially allocated. Rather, differences may relate instead to perceptual mechanisms that integrate visual information. Strategies to help individuals with ASD see this illusion more strongly could therefore encourage even more eye movements within and between the stimuli in the illusion display.
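
    As a concrete illustration of the relationship reported above, the sketch below regresses illusion strength on the number of saccades directed at the standard stimulus, together with group and an interaction term; all data, effect sizes, and variable names are hypothetical, not the study's.

        # Illustrative sketch: does scanning of the standard stimulus predict illusion strength?
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 60
        saccades_to_standard = rng.poisson(lam=8, size=n).astype(float)  # per-child saccade counts
        group = rng.integers(0, 2, size=n).astype(float)                 # 0 = TD, 1 = ASD (hypothetical coding)
        illusion_strength = 0.05 * saccades_to_standard + rng.normal(scale=0.2, size=n)

        # Predict illusion strength from saccades, group, and their interaction.
        X = sm.add_constant(np.column_stack([saccades_to_standard, group,
                                             saccades_to_standard * group]))
        fit = sm.OLS(illusion_strength, X).fit()
        print(fit.summary())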

    Perceptual Discrimination of Basic Object Features Is Not Facilitated When Priming Stimuli Are Prevented From Reaching Awareness by Means of Visual Masking

    Our understanding of how form, orientation and size are processed within and outside of awareness is limited and requires further investigation. Therefore, we investigated whether the visual discrimination of basic object features can be influenced by subliminal processing of stimuli presented beforehand. Visual masking was used to render stimuli perceptually invisible. Three experiments examined whether visible and invisible primes could facilitate the subsequent feature discrimination of visible targets. The experiments differed in the kind of perceptual discrimination that participants had to make: namely, participants were asked to discriminate visual stimuli on the basis of their form, orientation, or size. In all three experiments, we demonstrated reliable priming effects when the primes were visible but not when the primes were made invisible. Our findings underscore the importance of conscious awareness in facilitating the perceptual discrimination of basic object features.

    Evaluating bodily self-consciousness and the brain using multisensory perturbation and fMRI

    In this article, we consider the usefulness of functional magnetic resonance imaging (fMRI) and perturbation in evaluating causal relationships between bodily self-consciousness and the brain. We argue that fMRI research is not always restricted to correlational statements when it is combined with perturbation techniques and can sometimes permit some degree of causal inference, such as when bodily illusions are examined with fMRI. In these instances, one is changing a participant’s conscious bodily self by experimentally perturbing mechanisms that are involved in multisensory integration.

    Perceptual size discrimination requires awareness and late visual areas: A continuous flash suppression and interocular transfer study

    We applied continuous flash suppression (CFS) during an interocular transfer paradigm to evaluate the importance of awareness and the contribution of early versus late visual structures in size recognition. Specifically, we tested whether size judgements of a visible target could be influenced by a congruent or incongruent prime presented to the same or a different eye. Without CFS, participants categorised a target as “small” or “large” more quickly when it was preceded by a congruent prime, regardless of whether the prime and target were presented to the same or a different eye. Interocular transfer enabled us to infer that the observed priming was mediated by late visual areas. In contrast, there was no priming under CFS, which underscores the importance of awareness. We conclude that awareness and late visual structures are important for size perception and that any subconscious processing of the stimulus has minimal effect on size recognition.
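
    To spell out the measure implied above, the congruency priming effect is simply the mean reaction-time difference between incongruent and congruent prime–target trials, computed separately for same-eye versus different-eye presentation and for trials with versus without CFS; the sketch uses hypothetical column names and toy data, not the study's.

        # Illustrative sketch: congruency priming effect by eye condition and CFS (toy data).
        import pandas as pd

        trials = pd.DataFrame({
            "rt_ms":     [512, 530, 498, 505, 540, 542, 520, 521],
            "congruent": [True, False, True, False, True, False, True, False],
            "same_eye":  [True, True, False, False, True, True, False, False],
            "cfs":       [False, False, False, False, True, True, True, True],
        })

        mean_rt = trials.groupby(["cfs", "same_eye", "congruent"])["rt_ms"].mean().unstack("congruent")
        priming_ms = mean_rt[False] - mean_rt[True]   # positive = faster after a congruent prime
        print(priming_ms)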

    Temporal features of size constancy for perception and action in a real-world setting: A combined EEG-kinematics study

    A stable representation of object size, in spite of continuous variations in retinal input due to changes in viewing distance, is critical for perceiving and acting in a real 3D world. In fact, our perceptual and visuo-motor systems exhibit size and grip constancies in order to compensate for the natural shrinkage of the retinal image with increased distance. The neural basis of this size-distance scaling remains largely unknown, although multiple lines of evidence suggest that size-constancy operations might take place remarkably early, already at the level of the primary visual cortex. In this study, we examined for the first time the temporal dynamics of size constancy during perception and action by using a combined measurement of event-related potentials (ERPs) and kinematics. Participants were asked to maintain their gaze steadily on a fixation point and perform either a manual estimation or a grasping task towards disks of different sizes placed at different distances. Importantly, the physical size of the target was scaled with distance to yield a constant retinal angle. Meanwhile, we recorded EEG data from 64 scalp electrodes and hand movements with a motion capture system. We focused on the first positive-going visual evoked component peaking at approximately 90 ms after stimulus onset. We found earlier latencies and greater amplitudes in response to bigger than to smaller disks of matched retinal size, regardless of the task. In line with the ERP results, manual estimates and peak grip apertures were larger for the bigger targets. We also found task-related differences at later stages of processing from a cluster of central electrodes, whereby the mean amplitude of the P2 component was greater for manual estimation than for grasping. Taken together, these findings provide novel evidence that size constancy for real objects at real distances occurs at the earliest cortical stages and that early visual processing does not change as a function of task demands.
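
    To make the stimulus manipulation above concrete, scaling a target's physical size with viewing distance so that its retinal angle stays constant follows directly from the size–distance geometry; the sketch below is illustrative, with hypothetical distances and visual angle rather than the study's actual stimulus parameters.

        # Illustrative sketch: physical diameter needed at each distance for a constant retinal angle.
        import math

        def size_for_constant_angle(visual_angle_deg: float, distance_cm: float) -> float:
            """Physical diameter (cm) that subtends the given visual angle at the given distance."""
            return 2.0 * distance_cm * math.tan(math.radians(visual_angle_deg) / 2.0)

        for distance_cm in (30.0, 45.0, 60.0):                    # hypothetical viewing distances
            print(distance_cm, round(size_for_constant_angle(5.0, distance_cm), 2))  # 5 deg target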